Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual Retrieval

Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders

Adversarial propagation and zero-shot cross-lingual transfer of word vector specialization

On the relation between linguistic typology and (limitations of) multilingual language modeling

Cross-lingual semantic specialization via lexical relation induction
Ponti, E.; Vulić, I.; Glavaš, G. In Proceedings of EMNLP-IJCNLP 2019 (Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing).

Do we really need fully unsupervised cross-lingual embeddings?
Vulić, I.; Glavaš, G.; Reichart, R. In Proceedings of EMNLP-IJCNLP 2019 (Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing).

Towards zero-shot language modeling
Ponti, E.; Vulić, I.; Cotterell, R. In Proceedings of EMNLP-IJCNLP 2019 (Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing).

Zero-shot language transfer for cross-lingual sentence retrieval using bidirectional attention model

Specializing distributional vectors of all words for lexical entailment

Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation

Learning unsupervised multilingual word embeddings with incremental multilingual hubs
Heyman, G.; Verreet, B.; Vulić, I.; Moens, M.-F. In Proceedings of NAACL-HLT 2019 (Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies).

Abstract:
Recent research has discovered that a shared bilingual word embedding space can be induced by projecting monolingual word embedding spaces from two languages using a self-learning paradigm without any bilingual supervision. However, it has also been shown that for distant language pairs such fully unsupervised self-learning methods are unstable and often get stuck in poor local optima due to reduced isomorphism between starting monolingual spaces. In this work, we propose a new robust framework for learning unsupervised multilingual word embeddings that mitigates the instability issues. We learn a shared multilingual embedding space for a variable number of languages by incrementally adding new languages one by one to the current multilingual space. Through the gradual language addition our method can leverage the interdependencies between the new language and all other languages in the current multilingual hub/space. We find that it is beneficial to project more distant languages later in the iterative process. Our fully unsupervised multilingual embedding spaces yield results that are on par with the state-of-the-art methods in the bilingual lexicon induction (BLI) task, and simultaneously obtain state-of-the-art scores on two downstream tasks: multilingual document classification and multilingual dependency parsing, outperforming even supervised baselines. This finding also accentuates the need to establish evaluation protocols for cross-lingual word embeddings beyond the omnipresent intrinsic BLI task in future work.

URL: https://doi.org/10.17863/CAM.39779 https://www.repository.cam.ac.uk/handle/1810/292618
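The abstract above describes incrementally projecting each new language into the current multilingual hub space. As a minimal, hypothetical sketch of just the projection step (not the paper's actual self-learning pipeline, which induces seed translation pairs without supervision), orthogonal Procrustes recovers the map from a set of anchor word pairs; the toy data below stands in for real monolingual embeddings:

```python
import numpy as np

def procrustes(src, tgt):
    """Orthogonal W minimizing ||src @ W - tgt||_F (closed-form via SVD)."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(0)

# Toy setup: a shared "hub" space; each new language is a rotated copy
# of a subset of it. Anchor rows are assumed known here, whereas the
# paper induces them in a fully unsupervised way.
dim = 50
hub = rng.normal(size=(200, dim))

for n_anchors in (100, 80):  # add languages to the hub one by one
    true_rot, _ = np.linalg.qr(rng.normal(size=(dim, dim)))
    lang = hub[:n_anchors] @ true_rot        # new language's embedding space
    W = procrustes(lang, hub[:n_anchors])    # map language -> current hub
    aligned = lang @ W
    # With clean anchors, the rotation is recovered almost exactly
    assert np.linalg.norm(aligned - hub[:n_anchors]) < 1e-6
```

With noisy, partially isomorphic spaces the residual is nonzero, which is the instability for distant languages that motivates adding such languages later in the incremental process.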